Honey, I Shrunk the Sample Covariance Matrix
Authors
Abstract
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
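To make the shrinkage transformation concrete, the sketch below combines the sample covariance matrix with a constant-correlation target through a convex weight. It is only an illustration: the function name shrink_covariance is ours, and the shrinkage intensity delta is taken as a user-supplied input rather than computed from the optimal-intensity formula derived in the paper.

```python
import numpy as np

def shrink_covariance(returns, delta):
    """Shrink the sample covariance matrix toward a constant-correlation target.

    `returns` is a T x N matrix of asset returns; `delta` in [0, 1] is the
    shrinkage intensity (supplied by the user in this sketch, not estimated).
    """
    sample = np.cov(returns, rowvar=False)           # N x N sample covariance
    std = np.sqrt(np.diag(sample))                   # asset volatilities
    corr = sample / np.outer(std, std)               # sample correlation matrix
    n = sample.shape[0]
    avg_corr = (corr.sum() - n) / (n * (n - 1))      # average off-diagonal correlation
    target = avg_corr * np.outer(std, std)           # constant-correlation target
    np.fill_diagonal(target, np.diag(sample))        # keep the sample variances on the diagonal
    return delta * target + (1.0 - delta) * sample   # convex combination of target and sample
```

With delta = 0 the estimator reduces to the sample covariance matrix, and with delta = 1 it becomes the structured target; intermediate values pull extreme coefficients toward more central ones, as described above.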
Similar Works
On Mean Variance Portfolio Optimization: Improving Performance Through Better Use of Hedging Relations
In portfolio optimization, the inverse covariance matrix prescribes the hedge trades where a portfolio of stocks hedges each one with all the other stocks to minimize portfolio risk. In practice with finite samples, however, multicollinearity makes the hedge trades too unstable to be reliable. By reducing the number of stocks in each hedge trade to curb estimation errors, we motivate a “sparse”...
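As a rough illustration of the role the inverse covariance matrix plays in these hedge trades, the sketch below computes global minimum-variance weights proportional to the inverse covariance matrix applied to a vector of ones; the instability discussed above shows up as extreme long and short positions when the sample covariance matrix is nearly multicollinear. The function name is illustrative and this is not the sparse estimator proposed in that paper.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: weights proportional to inv(cov) @ 1.

    When `cov` is ill-conditioned (near multicollinearity), the solution is
    unstable and produces extreme positive and negative weights.
    """
    ones = np.ones(cov.shape[0])
    raw = np.linalg.solve(cov, ones)   # solve cov @ raw = 1 rather than forming the inverse
    return raw / raw.sum()             # normalize so the weights sum to one
```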
On Asymptotics of Eigenvectors of Large Sample Covariance Matrix
Let $\{X_{ij}\}$, $i, j = 1, 2, \ldots$, be a double array of i.i.d. complex random variables with $EX_{11} = 0$, $E|X_{11}|^2 = 1$ and $E|X_{11}|^4 < \infty$, and let $A_n = \frac{1}{N} T_n^{1/2} X_n X_n^* T_n^{1/2}$, where $T_n^{1/2}$ is the square root of a nonnegative definite matrix $T_n$ and $X_n$ is the $n \times N$ matrix of the upper-left corner of the double array. The matrix $A_n$ can be considered as a sample covariance matrix of an i.i.d. sample from a pop...
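A small numerical sketch of the matrix $A_n$ described above, under the stated assumptions (standardized i.i.d. entries and a nonnegative definite $T_n$); the dimensions, the random-number generator, and the way $T_n$ is built are illustrative choices, not part of the original setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 200                                # illustrative dimensions
X = rng.standard_normal((n, N))               # i.i.d. entries with mean 0 and variance 1
B = rng.standard_normal((n, n))
T = B @ B.T                                   # a nonnegative definite population matrix T_n
vals, vecs = np.linalg.eigh(T)
T_sqrt = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T   # T_n^{1/2}
A = T_sqrt @ X @ X.conj().T @ T_sqrt / N      # A_n = (1/N) T^{1/2} X X* T^{1/2}
```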
Compression of Breast Cancer Images By Principal Component Analysis
The principle of dimensionality reduction with PCA is the representation of the dataset $X$ in terms of eigenvectors $e_i \in \mathbb{R}^N$ of its covariance matrix. The eigenvectors oriented in the direction with the maximum variance of $X$ in $\mathbb{R}^N$ carry the most relevant information of $X$. These eigenvectors are called principal components [8]. Ass...
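A minimal sketch of this construction, assuming a data matrix X whose rows are samples: it forms the sample covariance matrix and keeps the eigenvectors associated with the k largest eigenvalues as principal components. The function name and the eigendecomposition route (rather than an SVD) are our choices for illustration.

```python
import numpy as np

def principal_components(X, k):
    """Return the top-k eigenvectors of the covariance matrix of X (rows = samples)."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = Xc.T @ Xc / (X.shape[0] - 1)       # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigendecomposition, eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]       # indices of the k largest eigenvalues
    return vecs[:, order]                    # columns are the principal components
```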
Selecting a Shrinkage Parameter in Structural Equation Modeling with a Near Singular Covariance Matrix by the GIC Minimization Method
In structural equation modeling (SEM), a covariance parameter is derived by minimizing the discrepancy between a sample covariance matrix and a covariance matrix having a specified structure. When a sample covariance matrix is a near singular matrix, Yuan and Chan (2008) proposed the use of an adjusted sample covariance matrix instead of the sample covariance matrix in the discrepancy function ...
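As a rough sketch of this kind of adjustment, the snippet below adds a ridge term to a near-singular sample covariance matrix; the specific adjustment of Yuan and Chan (2008) and the GIC-based selection of the parameter are not reproduced here, so both the function name and the parameter `a` are purely illustrative.

```python
import numpy as np

def adjusted_covariance(sample_cov, a):
    """Ridge-style adjustment S + a * I for a near-singular sample covariance matrix.

    Only an illustration of regularizing toward invertibility; choosing `a`
    (for example, by minimizing a GIC-type criterion) is not implemented here.
    """
    return sample_cov + a * np.eye(sample_cov.shape[0])
```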
Journal title:
Volume/Issue:
Pages: -
Publication date: 2004